Mixed membership nearest neighbor model with feature difference
Authors
Abstract
The nearest neighbor model has been popular for spatial data. It assumes a distance-based neighborhood structure among the set of entities that define the observations in a dataset. Values of the dependent variable are a function of the corresponding values of its neighbors and of the entity's own explanatory variables. In this paper, we extend the concept of "spatial neighbors" to a more general network dependency in which the network is not necessarily distance-based. As each entity can belong to multiple groups, entities can be related by multiple networks. By exploiting the overlapping information exhibited by an entity's neighbors, we modify the traditional spatial autoregressive model with autoregressive disturbances (SARAR) in three ways. First, multiple networks are allowed. Second, differences between variables and their neighboring averages are applied. Third, apart from independent innovations, we investigate the case where the innovations are also multiple-network dependent. Statistical inference on the parameters is achieved by maximizing the profile log-likelihood function. The standard errors of the maximum likelihood estimators are calculated from the inverse Fisher information matrix. Four variations of the models are applied to predict changes in the stock prices of 555 companies listed on the Hong Kong Stock Exchange. The model with attribute differencing produces a smaller out-of-sample mean prediction error sum of squares than the other models.
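The "attribute differencing" idea in the abstract (each variable minus its neighboring average under a row-normalized network weight matrix) can be sketched as follows. This is a minimal illustration with a hypothetical 4-node network, not the authors' implementation:

```python
import numpy as np

# Hypothetical 4-node network: adjacency[i, j] = 1 if entity j is a neighbor of entity i.
adjacency = np.array([
    [0, 1, 1, 0],
    [1, 0, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 0],
], dtype=float)

# Row-normalize so each row sums to 1, giving the usual network weight matrix W.
W = adjacency / adjacency.sum(axis=1, keepdims=True)

# An explanatory variable observed on each entity.
x = np.array([2.0, 4.0, 6.0, 8.0])

# Attribute differencing: each entity's value minus the average over its neighbors.
x_diff = x - W @ x

print(x_diff)  # entity 0: 2.0 - (4.0 + 6.0)/2 = -3.0, and so on
```

With multiple networks, as in the paper, one such difference term would be computed per weight matrix and all would enter the regression as covariates.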
Similar resources
An Improved K-Nearest Neighbor with Crow Search Algorithm for Feature Selection in Text Documents Classification
The Internet provides easy access to all kinds of library resources. However, classifying documents from a large amount of data is still an issue and demands time and energy to find certain documents. Classifying similar documents into specific classes can reduce the time spent searching for the required data, particularly for text documents. This is further facilitated by using Artificial...
Weighted k Nearest Neighbor Classification on Feature Projections
H. Altay Güvenir and Aynur Akkuş Department of Computer Engineering and Information Science Bilkent University, 06533, Ankara, Turkey fguvenir, [email protected] Abstract. This paper proposes an extension to the k Nearest Neighbor algorithm on Feature Projections, called kNNFP. The kNNFP algorithm has been shown to achieve comparable accuracy with the well-known kNN algorithm. However, k...
Target Neighbor Consistent Feature Weighting for Nearest Neighbor Classification
We consider feature selection and weighting for nearest neighbor classifiers. A technical challenge in this scenario is how to cope with discrete update of nearest neighbors when the feature space metric is changed during the learning process. This issue, called the target neighbor change, was not properly addressed in the existing feature weighting and metric learning literature. I...
Nearest neighbor classification from multiple feature subsets
Combining multiple classifiers is an effective technique for improving accuracy. There are many general combining algorithms, such as Bagging, Boosting, or Error Correcting Output Coding, that significantly improve classifiers like decision trees, rule learners, or neural networks. Unfortunately, these combining methods do not improve the nearest neighbor classifier. In this paper, we present MFS, a...
Journal
Journal title: Journal of Forecasting
Year: 2022
ISSN: 0277-6693, 1099-131X
DOI: https://doi.org/10.1002/for.2882